
(AAAI 2018) End-to-End United Video Dehazing and Detection

Li B, Peng X, Wang Z, et al. End-to-end united video dehazing and detection[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.



1. Overview


1.1. Motivation

  • End-to-end video dehazing has not been explored before.
  • Dehazing a video frame by frame ignores temporal consistency across frames.

This paper proposes:

  • EVD-Net (End-to-End Video Dehazing Network)
  • EVDD-Net (End-to-End United Video Dehazing and Detection Network)

1.2. Related Work

  • Classical atmosphere scattering model
  • DehazeNet
  • MSCNN
  • AOD-Net: re-formulates the atmosphere scattering model so that dehazing becomes end-to-end trainable (equations below)
  • Video dehazing
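
For reference (my own transcription; notation follows the AOD-Net paper), the atmosphere scattering model and AOD-Net's re-formulation are:

  I(x) = J(x) t(x) + A (1 - t(x)),    t(x) = exp(-beta d(x))

where I is the observed hazy frame, J the clean scene radiance, A the global atmospheric light, beta the scattering coefficient, and d(x) the scene depth. AOD-Net folds t(x) and A into a single estimated map K(x):

  J(x) = K(x) I(x) - K(x) + b,    K(x) = [ (I(x) - A) / t(x) + (A - b) ] / (I(x) - 1)

with b a constant bias (set to 1 in AOD-Net).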

1.3. Dataset

1.3.1. Dehazing

  • Synthetic hazy video dataset built from the TUM RGB-D dataset
  • Depth information is refined before haze synthesis
  • Test set: a real-world video shot on a city road when PM2.5 was 223

1.3.2. Dehazing + Detection

  • ILSVRC2015 VID
  • Haze synthesized using depth estimated by a 2016 depth-estimation paper
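
A minimal sketch of how a hazy frame can be rendered from a clean frame plus a depth map using the scattering model above (a NumPy illustration of the general recipe; the A and beta values are placeholders, not the paper's settings):

  import numpy as np

  def synthesize_haze(clean, depth, A=1.0, beta=1.0):
      """Render a hazy frame from a clean frame and per-pixel depth.

      clean : float array in [0, 1], shape (H, W, 3)
      depth : float array, shape (H, W), scene depth
      A     : global atmospheric light (scalar here for simplicity)
      beta  : scattering coefficient controlling haze density
      """
      t = np.exp(-beta * depth)          # transmission map t(x) = exp(-beta d(x))
      t = t[..., None]                   # broadcast over the RGB channels
      hazy = clean * t + A * (1.0 - t)   # I(x) = J(x) t(x) + A (1 - t(x))
      return np.clip(hazy, 0.0, 1.0)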

1.4. Start Point

Based on the AOD-Net architecture; a rough sketch of its K-estimation module follows.
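
For orientation, a rough PyTorch sketch of the AOD-Net K-estimation module that EVD-Net builds on (reproduced from memory; the layer widths and kernel sizes are assumptions, not guaranteed to match the published configuration):

  import torch
  import torch.nn as nn

  class AODNetSketch(nn.Module):
      """Approximate AOD-Net: a small K-estimation CNN followed by the
      clean-image recovery J = K * I - K + b."""
      def __init__(self, b=1.0):
          super().__init__()
          self.b = b
          self.conv1 = nn.Conv2d(3, 3, 1)
          self.conv2 = nn.Conv2d(3, 3, 3, padding=1)
          self.conv3 = nn.Conv2d(6, 3, 5, padding=2)    # input: cat(conv1, conv2)
          self.conv4 = nn.Conv2d(6, 3, 7, padding=3)    # input: cat(conv2, conv3)
          self.conv5 = nn.Conv2d(12, 3, 3, padding=1)   # input: cat(conv1..conv4)
          self.relu = nn.ReLU(inplace=True)

      def forward(self, hazy):
          x1 = self.relu(self.conv1(hazy))
          x2 = self.relu(self.conv2(x1))
          x3 = self.relu(self.conv3(torch.cat([x1, x2], dim=1)))
          x4 = self.relu(self.conv4(torch.cat([x2, x3], dim=1)))
          k = self.relu(self.conv5(torch.cat([x1, x2, x3, x4], dim=1)))
          return self.relu(k * hazy - k + self.b)   # J(x) = K(x) I(x) - K(x) + b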



1.5. EVD-Net

  • Multi-frame fusion: the global atmospheric light A should change little, or only slowly, over a moderate number of consecutive frames, so information from neighboring frames can be fused (see the fusion sketch below).
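
A minimal sketch of the multi-frame idea, assuming input-level fusion (channel concatenation of consecutive hazy frames before estimating a single K map for the center frame); the paper compares fusion at several stages of the network, and the layer sizes here are placeholders:

  import torch
  import torch.nn as nn

  class MultiFrameDehazeSketch(nn.Module):
      """Fuse several consecutive hazy frames, exploiting that A (and largely t)
      changes slowly across them, and dehaze the center frame."""
      def __init__(self, num_frames=5, b=1.0):
          super().__init__()
          self.b = b
          self.k_estimator = nn.Sequential(
              nn.Conv2d(3 * num_frames, 16, 3, padding=1), nn.ReLU(inplace=True),
              nn.Conv2d(16, 3, 3, padding=1), nn.ReLU(inplace=True),
          )

      def forward(self, frames):                  # frames: (B, num_frames, 3, H, W)
          center = frames[:, frames.shape[1] // 2]
          stacked = frames.flatten(1, 2)          # (B, num_frames * 3, H, W)
          k = self.k_estimator(stacked)           # one K map for the center frame
          return torch.relu(k * center - k + self.b)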


1.6. EVDD-Net

  • Multi-frame Faster R-CNN as the detection part
  • EVD-Net as the dehazing part, concatenated with the detector and jointly tuned end-to-end (see the cascade sketch below)
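
A conceptual sketch of the cascade, with torchvision's single-frame Faster R-CNN standing in for the paper's multi-frame detector (which aggregates information over neighboring frames); the point is that the detection loss back-propagates into the dehazing front end:

  import torch.nn as nn
  import torchvision

  class EVDDSketch(nn.Module):
      """Dehaze first, then detect, trained jointly so that the detection
      loss also updates the dehazing front end."""
      def __init__(self, dehazer):
          super().__init__()
          self.dehazer = dehazer    # e.g. the multi-frame dehazing sketch above
          self.detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)

      def forward(self, frames, targets=None):
          dehazed = self.dehazer(frames)        # (B, 3, H, W) dehazed center frames
          images = list(dehazed)                # the detector expects a list of images
          # Training mode returns a dict of losses (targets required);
          # eval mode returns per-image detections.
          return self.detector(images, targets)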




2. Experiments


2.1. Details

  • MSE loss is found to align well with SSIM and visual quality (a minimal loss sketch follows)
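
A minimal sketch of that training objective, assuming dehazed and ground-truth clean frames as tensors (SSIM is treated here purely as an evaluation metric, not part of the loss):

  import torch
  import torch.nn.functional as F

  def dehazing_loss(dehazed: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
      """Plain per-pixel MSE against the clean ground-truth frame."""
      return F.mse_loss(dehazed, clean)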

2.2. Fusion Strategy



2.3. Comparison of Dehazing



2.4. Comparison of Detection

  • Naive concatenation of low-level and high-level models often cannot sufficiently boost the high-level task performance
  • JAOD-Faster RCNN: flickering and inconsistent detections across frames


  • Only EVDD-Net detects 4 cars in all frames